You can run dxdiag by typing dxdiag in the search bar.
Hostnames will default to https, whereas an IP address will default to http.
Filename encryption is on our roadmap and we have a working prototype already: https://s3drive.canny.io/feature-requests/p/filenamefilepath-encryption (ETA ~April 2023).
We're doing further research to understand the standards and well-established implementations in that area, so we can stay compatible.
The sharing functionality is based on S3 presigned URLs. Their limitation is that the signature can't be valid for longer than 7 days, so a new link would have to be generated every 7 days. We're researching how to overcome this limitation. For instance, we could combine this with a link shortener, so there is a single link that doesn't change, but under the hood we would regenerate the destination link as needed.
The encrypted share link has the master key at the end after the # character and looks like this:
https://s3.us-west-004.backblazeb2.com/my-s3drive/.aashare/hsnwye5bno3p/index.html?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=004060aad6064900000000044%2F20230214%2Fus-west-004%2Fs3%2Faws4_request&X-Amz-Date=20230214T095014Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=abdcd875e2106ee54c6a1d1851617c7e694e121464c5ca9023526ce2836be595#GKSGYX4HGNAd4nTcXb/GIA==
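For reference, a presigned URL like the one above can also be produced with the AWS CLI; the 7-day cap is a hard limit of SigV4 regardless of what you pass (bucket/key below are placeholders):
# Generate a presigned GET URL valid for the maximum 7 days (604800 seconds)
aws s3 presign s3://my-s3drive/some/file.jpg --expires-in 604800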
What it does is try to load the encrypted asset as usual, but it's not aware per se whether an asset is encrypted. In the background, JavaScript tries to fetch the asset and replaces the one on the screen with the decrypted version. It looks like this has failed on your side. Can you go to the console (right-click -> Inspect Element) to see if there is anything abnormal (that is, an error in the Console or a status code different from 200 in any of the network requests)?
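If the browser console is inconvenient, the same check can be done from a terminal (substitute your actual presigned link for the placeholder URL):
# Print only the HTTP status code of the share request
curl -s -o /dev/null -w "%{http_code}\n" "https://s3.us-west-004.backblazeb2.com/my-s3drive/...presigned-url..."
A status other than 200 would point at the request itself rather than the in-browser decryption step.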
I use rclone for accessing my files or backing up my photos on a day-to-day basis.... and I am not afraid of CLIs. (edited)
I use garage... and there is no way to provide a region in S3Drive.
It seems that we may add an additional form field to specify the region. Garage sets it in its toml config file like this: s3_region = "us-east-1".
We auto-detect the region from the endpoint URL and have a way to detect a custom region from MinIO... and if that doesn't work we use the most common default, which is us-east-1.
I'm not sure what the .aa* file and folder are about, but some "don't touch my bucket" parameter would be nice if the app doesn't strictly need them; otherwise that sounds like an additional bucket policy :D
EDIT: looks like the file is for some kind of init feature within the app, and one of the two folders is the trash. I've seen the versioning feature request, but the trash folder could be opt-in if possible. (edited)
The .aainit file is our write test, as well as an ETag response validation (which is required for not-yet-released syncing features), as some providers (talking mostly about iDrive E2 with SSE enabled) don't generate valid ETags. BTW, would you like S3Drive to support a read-only mode?
Regardless, we will try to improve the clarity of this operation, so the user feels more confident that we're not doing some shady writes/reads.
Speaking of Trash itself, likely this week, starting on Android first, there will be a Settings option to disable the Trash feature altogether (it is a soft-delete emulation, but slow and pointless if the bucket already supports versioning). A versioning UI with restore options will come a little bit later. (edited)
For the .aainit file it's fine, but I'd prefer if the app saved the test results locally and then deleted the file. I want to be able to write files so I wouldn't use a read-only mode, and we can always create read-only access keys if we want to be sure that's how the app will behave! I'm very interested in the share link expiry slider or date picker though; I never share for 7 days, it's either a smaller duration or permanent.
Cool, I don't mind not having the versioning UI yet, but I had to delete my file versions + the trash versions to clean up my bucket so... yeah, trash is cool but I assume most people who want that have versioning enabled. I assume you already have quite a few buckets on various providers to test your features, but I can provide a MinIO one if it could be of interest.
There was a 2nd folder with an HTML page in it, not sure what it was about, but same thing I'd say; that's probably the least expected action from an S3 browser... While I audited the actions and indeed didn't find anything malicious, that could get me assassinated by my colleagues if I ever connected a more important bucket to the app.
The file was still visible via s3 ls even though headObject couldn't retrieve it as a valid S3 entry. I am curious if you came across something similar. (edited)
We could support the .aainit file being nuked (deleting the file itself plus all of its versions) once the init is done, as well as raw presigned URL sharing. For encrypted files, though, the share page needs to headObject and get the envelope AES keys... so it must be a toggle with some warning. It would then simply return the Blob that's stored on S3, regardless of what's inside. (edited)
The app could also try to read a key that is expected to be absent (e.g. .s3drive_bucket_read_test) and verify the response instead of trying to write a file.
The slider now works, so it's possible to set an expiry time shorter than the maximum of 7 days. There is an option to use raw presigned URLs.
We've also introduced a basic versioning UI. It is now possible to preview revisions. In a next update we will allow opening, previewing, deleting and restoring a particular version.
Thank you for these suggestions, they were great and helped us to validate it all!
... and as always, we're open to feedback.
(e.g. if folder/file.txt exists but the folder/ entry doesn't explicitly exist, it is still searchable)
There is an option to hide files starting with: .
As usual there are a couple of other performance improvements and bugfixes.
We would love to hear how you are finding the new changes, and whether version management during file operations is what you would expect. (edited)
May I suggest rewording the setting from:
Hide "." files
Show all files, including starting with the dot.
Hide files starting with the dot character
To:
Hide dotfiles
Show all files, including ones starting with a dot.
Hide files starting with the dot character.
You could use a feature_flags int that computes to an array of pro features with bitwise operations; easy on your API and authentication gateway or whatever you do behind the scenes.
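A minimal sketch of that idea in shell; the flag names and bit values here are made up for illustration, not S3Drive's actual feature bits:
# Each pro feature occupies one bit
PRO_E2E=1; PRO_SYNC=2; PRO_MOUNT=4
# The API returns a single int, e.g. 5 = PRO_E2E | PRO_MOUNT
feature_flags=5
(( feature_flags & PRO_SYNC )) && echo "sync enabled" || echo "sync disabled"
One integer column on the back-end, one bitwise AND per check on the client.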
MinioError: ListObjectsV2 search parameter maxKeys not implemented (edited)
s3.<region>.amazonaws.com
I get: OS Error: CERTIFICATE_VERIFY_FAILED: self signed certificate. And I indeed have a self-signed certificate, but I followed your instructions from https://github.com/s3drive/app/issues/19 (https://proxyman.io/posts/2020-09-29-Install-And-Trust-Self-Signed-Certificate-On-Android-11) and my browser on Android recognizes this certificate (if I go to the MinIO browser, my Chrome is fine with the cert). But S3Drive continues to fail with the same error.
I'm using the latest version. (edited)
We get a null response, which is somewhat expected. I would expect to get the SSL-related error instead.
support-bugs-requests is too long, but there's no reason to have multiple channels for that either.
# Obscure password
echo "YourPlaintextPassword" | rclone obscure -
# Add it to Rclone config, config file location: `rclone config file`
[s3drive_remote]
type = s3
provider = Other
access_key_id = <access_key_id>
secret_access_key = <secret_access_key>
endpoint = <endpoint>
region = <region>
[s3drive_crypt]
type = crypt
filename_encoding = base64
remote = s3drive_remote:<bucket_name>
password = <obscuredPassword>
filename_encryption = standard
directory_name_encryption = true
suffix = none
Then you can use s3drive_crypt as your remote encrypted location.
Please note that whilst we support both encrypted and unencrypted files in the same location, Rclone doesn't seem to like the mix and won't display existing unencrypted files for the encrypted remote. In such a case it's better to either keep everything encrypted globally or have dedicated paths with encrypted-only or unencrypted-only files. (edited)
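Once both remotes are in place, day-to-day operations simply target the crypt remote (local path below is a placeholder):
# List files with decrypted names
rclone ls s3drive_crypt:
# Upload a folder; contents and names are encrypted transparently
rclone copy ~/Documents/reports s3drive_crypt:reports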
filename_encoding = base64
suffix = none
By default, Rclone's encoding is base32: https://github.com/rclone/rclone/blob/88c72d1f4de94a5db75e6b685efdbe525adf70b8/backend/crypt/crypt.go#L140 unless overridden by the config creator.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}"
]
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}/*"
]
}
]
}
You can replace ${aws:username} by anything you want, be it a variable or a fixed bucket name; there unfortunately isn't any group name variable.
I have a users group to which I assign the selfservice policy, then I add whoever I want to the users group and they'll be able to manage their very own bucket.
The two statements can also be combined into one:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::${aws:username}",
"arn:aws:s3:::${aws:username}/*"
]
}
]
}
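For what it's worth, wiring this up with the AWS CLI could look roughly like this (policy file name and account ID are placeholders):
# Create the managed policy from the JSON above
aws iam create-policy --policy-name selfservice --policy-document file://selfservice.json
# Attach it to the users group; members can then manage their own bucket
aws iam attach-group-policy --group-name users --policy-arn arn:aws:iam::123456789012:policy/selfservice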
How about a Contributor role? It isn't much, but still a nice way to recognize individuals who go out of their way to help the project out. What do you think about it?
Besides the AppImage, you can find a deb package in the releases: https://github.com/s3drive/app/releases if that's of any use for you.
Starting from czNkcml2ZQ== ("s3drive" base64-encoded), using this command: echo "czNkcml2ZQ==" | base64 -d | rclone obscure -
you can generate a password, e.g.: AQbZ5H8mrzlnkNj9MXnjpxS5QmxbRpw
which can be used in the Rclone config (rclone config file shows its location), as indicated in this post: https://discord.com/channels/1069654792902815845/1069654792902815848/1135157727216279585
Speaking of decryption speeds in the browser, let's continue in the support item that I've created: https://discord.com/channels/1069654792902815845/1140911911479808081 (edited)
rclone password dump gives the obscured password. You need to use your original plaintext password. Alternatively, you'll need to use "password reveal" on your obscured password.
https://forum.rclone.org/t/how-to-retrieve-a-crypt-password-from-a-config-file/20051
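If your Rclone build ships the command, revealing is a one-liner (using the example value from above):
# Turn an obscured config value back into plaintext
rclone reveal AQbZ5H8mrzlnkNj9MXnjpxS5QmxbRpw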
We're not supporting Rclone's 2nd password, but it's part of our roadmap: https://s3drive.canny.io/feature-requests/p/support-2nd-rclone-crypt-password
We're supporting the default Rclone salt: https://forum.rclone.org/t/how-to-correctly-setup-the-salt-for-the-crypt-remote/4273/2
I've created two additional roadmap items to support your use case:
https://s3drive.canny.io/feature-requests/p/add-support-for-custom-rclone-salt
https://s3drive.canny.io/feature-requests/p/add-option-to-restore-rclone-password
Please vote on them, so the priority is pushed higher.
If you have any more issues with S3Drive, please create a support item: https://discord.com/channels/1069654792902815845/1102236355645419550
Thanks (edited)
1.5.3 - https://s3drive.app/changelog
Please try installing the newest DMG from our website now. It should resolve your issues.
What message did you get exactly from the app? Was it that a more recent version is available, or perhaps that your version has expired? (edited)
Otherwise we would end up with something like the famous FCKGW-RHQQ2... license.
This would either mean that you would have to generate some activation key on our website from time to time and paste it into the app... or, once you activated features in your app with some activation key, you would have to deactivate it before you could use it on some other Windows client.
tocloud, // Upload to remote, delete remotely if file was deleted locally
tocloud_keepdeleted, // Won't remove file remotely if it was deleted locally
tocloud_compat, // If a file is removed remotely, local won't know that; it will be re-uploaded on the next occasion
In principle:
"To remote" will upload a file to the remote and delete it remotely if it was deleted locally. If a file is deleted remotely it won't get re-uploaded again.
"To remote (don't delete remotely)" - the same as "To remote", except it will keep the file on the remote even if it was deleted locally.
The above 2 options require bucket versioning support.
The "compatibility mode" doesn't require the versioning API; however, that makes it not aware of any file changes in between, so it's simply a blind one-way copy instead of a sync.
I hope that helps a little bit. We'll build documentation once we sort out a couple of challenges related to E2E encryption with syncing, as depending on how we manage to solve these problems it may influence the available options.
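For intuition, rough standalone Rclone analogues of these modes (remote name is a placeholder; these are not the exact commands S3Drive runs):
# "Compatibility mode": blind one-way copy, never deletes anything remotely
rclone copy ~/photos remote:photos
# "To remote": one-way sync where local deletions propagate to the remote
rclone sync ~/photos remote:photos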
[
{
"bucketName": "acme-internal-files",
"keyId": "EVLJ2eXJukWUR9U17dyQqq6NPTi9mUu6scqpLCau",
"applicationKey": "X9EiaepygvDK2S0fmMmFayehHoETDOphNP1r96PI",
"endpoint": "https://s3.us-west-004.backblazeb2.com",
"region": "us-west-004",
"host": "s3.us-west-004.backblazeb2.com",
"port": 443,
"useSSL": true,
"encryptionKey": "cG90YXRv",
"rclonePlaintextKey": true,
"filepathEncryptionEnabled": true,
"rcloneDerivedKey": [
116,
85,
199,
26,
177,
124,
134,
91,
132,
...
]
}
]
This may be a good start.
We plan to implement QR code login, but the QR size limitation makes QR not a solution for all use cases.
There are other means, e.g. the QR code could transfer a "placeholder ID" which would then be used to fetch the required details, but then again this setup would require more moving parts.
We're very much open on this. (edited)
Some fields are currently required, like host / gateway; or, if you want to set the encryption key, both encryptionKey and the generated rcloneDerivedKey must be provided.
If there is a need we could certainly simplify the format, so things get smartly derived if not present. (edited)
The encryptionKey field is required to set up the encryption.
Speaking of decryption, it's an open format. Naturally you can use S3Drive (on any platform) to access encrypted data (you'll need to access the bucket with the data and set up E2E with the same password that was initially used for encryption).
You can also mount the data as a network drive (that's possible from S3Drive after clicking on the tray icon).
Alternatively you can access the data using the rclone command, as we're 1:1 compatible with their encryption: https://rclone.org/crypt/#file-encryption
In that case please visit our docs to understand how you can set up rclone: https://docs.s3drive.app/advanced/#setup-with-rclone
Then you would be able to use commands like copy: https://rclone.org/commands/rclone_copy/ or sync: https://rclone.org/commands/rclone_sync/ or a couple of others depending on your needs.
There are a couple of options out there. (edited)
[
{
"bucketName": "bucket-photos",
"keyId": "keyId",
"applicationKey": "applicationKey",
"endpoint": "https://s3.pl-waw.scw.cloud",
"encryptionKey": "cG90YXRv"
}
]
This would configure all the necessary things and enable encryption with the password potato; the encryptionKey is the base64-encoded plaintext password. (edited)
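Producing that value is a one-liner; note the -n so no trailing newline ends up in the key (the output matches the config above):
echo -n "potato" | base64     # cG90YXRv
echo "cG90YXRv" | base64 -d   # potato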
The file picker looks for zenity, qarma and kdialog:
https://github.com/miguelpruivo/flutter_file_picker/issues/1282#issuecomment-1551924613
I will add this item to our internal items and try to play around in Xubuntu. In the meantime, would you be happy to try out the Flathub version? https://flathub.org/en-GB/apps/io.kapsa.drive (Please note that it awaits the 1.6.4 release, which will likely be available later today or tomorrow.) (edited)
Could you install zenity or kdialog on your OS and see if it solves the issue?
If it does, we will add it as a dependency to the .AppImage.
https://forum.juce.com/t/native-filechooser-not-used-on-linux-xfce/26347
We've now included zenity in our releases.
{
"url": "/api/create-checkout-session",
"data": {
"price": {
"id": "price_1NyfLNEv31gUd4RDtzV41wix",
"interval": "year",
"currency": "EUR",
"unit_amount": 0
}
},
"res": {}
}
(edited)
There is no content:// URI: since we operate on network resources, what we get with S3 is just a network URL that we don't store locally (except the video cache) and pass directly to the video player. Since the data isn't stored locally on the Android device, I don't think there is a method to expose it as a content URI.
If I understood a little bit more about your use case I might be able to come up with some other approach. (edited)
Can you run: glxinfo | grep "direct rendering"? (edited)
Moving file.png to test/ would rename it to testfile.png, and the file is not moved to the directory.
We're waiting for the 1.7.1 sync feature to be able to interact with the local FS.
Thanks for reporting this regression in 1.7.0; we've now prioritized this and shall be able to release a hotfix at some point today.
We may try libmpv2 as an alternative, but we don't really have the capacity at the moment to test things out.
Ideally movies should play as normal, as the MPV dependency is required by the media library that we use: https://pub.dev/packages/media_kit (edited)
We've identified an issue with the libmpv version. We're working to have it resolved promptly; please bear with us.
You can build the Flatpak yourself:
git clone --recursive git@github.com:flathub/io.kapsa.drive.git
cd io.kapsa.drive
flatpak-builder --user --install --force-clean build-dir io.kapsa.drive.json
... however it does require some prior environment setup, like:
flatpak install flathub org.freedesktop.Sdk//23.08
flatpak install flathub org.freedesktop.Platform
flatpak install org.freedesktop.Sdk.Extension.vala/x86_64/23.08
We will be providing a full guide on how to compile the Flatpak. (edited)
./S3Drive-x86_64.AppImage
(kapsa:2730352): Gdk-CRITICAL **: 09:39:57.636: gdk_window_get_state: assertion 'GDK_IS_WINDOW (window)' failed
package:media_kit_libs_linux registered.
flutter: *** sqflite warning ***
You are changing sqflite default factory.
Be aware of the potential side effects. Any library using sqflite
will have this factory as the default for all operations.
*** sqflite warning ***
method call InitAppWindow
method call InitSystemTray
SystemTray::set_system_tray_info title: (null), icon_path: /tmp/.mount_S3DrivJ2GgY2/data/flutter_assets/assets/logos/logo_42.png, toolTip: (null)
method call CreateContextMenu
value_to_menu_item type:label, label:Show
value_to_menu_item type:label, label:Hide
value_to_menu_item type:label, label:Start drive mount
value_to_menu_item type:label, label:Stop drive mount
value_to_menu_item type:label, label:Start WebDav
value_to_menu_item type:label, label:Stop WebDav
value_to_menu_item type:label, label:Support
value_to_menu_item type:label, label:Visit Website
value_to_menu_item type:label, label:About
value_to_menu_item type:label, label:Changelog
value_to_menu_item type:label, label:Logs
value_to_menu_item type:label, label:Version 1.7.11
method call SetContextMenu
Just a question, did you try running the Flatpak format? https://github.com/flathub/io.kapsa.drive/
We've released 1.7.16, with the next release awaiting Microsoft approval.
You can obscure the password like this: echo "secretpassword" | rclone obscure -
Can you provide your full Rclone config for your remote / back-end and crypt (remove your sensitive credentials: password, access key etc.)?
That doesn't look like the rclone version output; I guess you've provided the S3Drive version?
It might have been 1.6.5, I can't recall exactly, but there was some issue with S3Drive <> Rclone compatibility below that version.
Would you be keen to upgrade your Rclone version and see if that config works for you?
You have directory_name_encryption = true - do you also have filename/filepath encryption enabled on the S3Drive side?
... the /home/user/.ssh folder. Thanks.
I use rsync -av --exclude='cache' --exclude='build' source dest
to sync data to another local machine, and then I archive things and send them compressed and password-protected to Backblaze:
7z a -mhc=on -mhe=on -pVeryHardPasswordHere $folder.7z /home/tom/$folder/*
AWS_ACCESS_KEY_ID=<key> AWS_SECRET_ACCESS_KEY=<access> aws --endpoint https://s3.eu-central-001.backblazeb2.com s3 cp $folder.7z s3://my-backup-bucket
I use S3Drive to back up media from my phone to the cloud and for online access to other media files (mostly older photos).
I have yet to find the perfect backup strategy for photos, but I would say at this stage the bigger problem is keeping things tidy, organized and deduplicated.
Eventually I will get to that. (edited)
The fix is to set server_side_encryption = aws:kms in the config, which we've checked solves the issue; the challenge is that we don't know if the user actually enabled that setting on the iDrive side.
The quick fix is to turn off the "Default encryption" setting for the iDrive bucket; then the mount shall upload objects to iDrive without issues.
We need to spend more time on this to research whether we can detect this setting or whether we need to implement a prompt/question for the user and provide a configurable setting. (edited)
That would be the Writes or Full caching mode (combined with the vfs-cache-max-age setting, not yet configurable via S3Drive, default 1h; in other words, after 1h of not accessing files, they will be evicted from the cache).
If you switch to the "Old mount experience" in the Settings and have the Rclone CLI installed, you can then look up the exact command in the Logs and play with the settings yourself (based on this doc: https://rclone.org/commands/rclone_mount/#vfs-file-caching).
We could then provide more configuration options in S3Drive... or you are free to keep using Rclone outside of the S3Drive ecosystem. (edited)
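For reference, a manually started mount with the cache settings mentioned above would look roughly like this (remote name and mount point are placeholders):
# Cache writes locally, evict entries unused for 1h, cap the cache at 1024M
rclone mount s3drive_remote:my-bucket /mnt/s3 --vfs-cache-mode writes --vfs-cache-max-age 1h --vfs-cache-max-size 1024M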
This is likely due to the zenity package missing on the host OS; alternatively kdialog can be installed. What's your OS? (edited)
Does the /home/jeannesbond/S3Drive path exist on your machine?
I would also recommend using an external S3 account: https://docs.s3drive.app/setup/providers/ instead of the testing account, as it's not always stable enough just yet.
It's great you've included logs!
You can add an SMB remote to the Rclone config like this:
"smb": {
"host": "smb.hostname.com",
"pass": "<obscuredPass>",
"type": "smb",
"user": "usersomething"
}
Then you can set up Sync (from/to) or use the back-end in the same way as any other Rclone back-end within S3Drive. (edited)
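A quick sanity check of such a remote from the Rclone CLI, assuming the remote is saved under the name smb:
# List the shares visible on the SMB host
rclone lsd smb: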
"host": "smb.hostname.com",
"pass": "<obscuredPass>",
"type": "smb",
"user": "usersomething"
}
Then you can set up Sync (from/to) or use the back-end in a same way as any other Rclone within S3Drive. (edited)trashed_only = true
(e.g. trashed_only = true)
Stay tuned for the updates; in the meantime, if you have any feedback don't hesitate to reach out.
... also, I would like to thank you for your input. If you have registered an account I would happily assign you an Ultimate license for one year - if that's something that would interest you. (edited)
The base that our filesize library uses is 1024 instead of 1000, and there was a rounding issue as well. Expect this to be fixed in a next release.
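The difference is easy to see with a quick calculation, taking 5,000,000,000 raw bytes as an example:
bytes=5000000000
echo "scale=2; $bytes / (1000^3)" | bc   # 5.00 (decimal GB)
echo "scale=2; $bytes / (1024^3)" | bc   # 4.65 (binary GiB)
The same file can therefore show as 5.00 or 4.65 depending on which base the formatter uses.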
Unless you get Rclone initialization failed. Please contact support[...] (which indicates that after multiple tries the initialization failed), there is nothing to worry about.
"kmsEncryption":true
in the json config, but may I also suggest writing server_side_encryption = aws:kms
in the rclone configserver_side_encryption = aws:kms
in the rclone config manually will be overwritten by s3drive removing it"kmsEncryption":true
in the json config, but may I also suggest writing server_side_encryption = aws:kms
in the rclone config kmsEncryption
is set to true
in the config, then we should already be setting: server_side_encryption = aws:kms
in the Rclone config.
Does S3Drive behave differently?
The issue that we're aware of is that we only display the dialog which sets the kmsEncryption value when you mount a drive (we ask that for AWS and iDrive only).
We need to fix that, so the dialog is displayed also for Sync and other functionalities which internally use Rclone.
Even though I don't necessarily recommend modifying the app's config, a temporary solution might be setting kmsEncryption: true in the config (ideally while the app is closed) and then starting the app.
What's your S3 provider by the way? (edited)
You should then see server_side_encryption being set in the Rclone config. Isn't that what you're finding? (edited)
It now writes the server_side_encryption setting properly. I don't know what happened. I still have the json config timestamped Sunday, 21 April 2024, 12:57:39 AM with "kmsEncryption": true inside of it.
The file needs to be downloaded by rclone first (there is no streaming interface that we could use). (edited)
There is some confusion around S3Drive, and the name that we've chosen back in 2022 probably doesn't help here.
Technically, in some ways S3Drive is a GUI that sits on top of Rclone, but that's our additional feature, not the core one.
The core one revolves around S3 support, and storage plans will be available later this year.
We still plan to expand support for Rclone back-ends, including preview, thumbnails etc.
A compliant provider returns NoSuchKey using XML format.
hcm.s3storage.vn on the other hand returns an invalid error code (500 instead of 4xx) and an invalid format, HTML instead of XML:
Server: HyperCoreS3
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty/1.15.8.3</center>
</body>
</html>
Once I skipped the S3Drive read check (not possible currently from the app itself) I actually managed to run a couple of actions, that is: list, copy/rename, delete.
... so there are two non-exclusive solutions:
1. Contact hcm.s3storage.vn so they can fix the issue with their S3 API and make it compliant with the standard.
2. S3Drive to allow the user to skip the read check <--- this is something we would be willing to allow, but it would take us a while, as we're busy with other work at the moment.
You can strip the username and password from the Rclone configuration after setting up Proton; please see my comment here: https://www.reddit.com/r/ProtonMail/comments/18s211d/comment/kzfqub7/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
I haven't used that myself long enough; it may happen that at some point username and password will be required for the re-setup if by any chance client_refresh_token expires.
In the future we will allow password stripping in-app, so no manual step is required.
c) In general you may monitor any undesired file access to the config file, on Windows: C:\Users\<user>\AppData\Roaming\rclone\rclone.conf, as this is where the sensitive data is stored.
In the future we will support the Rclone encrypted config: https://rclone.org/docs/#configuration-encryption (edited)
The mount is exposed as \\server\s3drive_proton.
In the 1.9.2 version, which is available as a pre-release (and will be released to the general public in a few days): https://github.com/s3drive/windows-app/releases/tag/1.9.2, we've added an option to disable the network mount (and made that the default) in case you don't want to share it.
In order to start the mount after a reboot, you can use a combination of "Launch app at startup" and "Mount drive after app starts".
In the future we will add an option to start the app in the tray: https://s3drive.canny.io/feature-requests/p/desktop-app-minimize-to-tray-dont-close so the S3Drive window doesn't pop up each time. (edited)
That's what the gobbledygook is. (edited)
You can email support in the s3drive.app domain. It's indeed good to know that it somewhat works on GrapheneOS, as we haven't had a chance to try it out just yet.
Background sync should work fine on Android, but as always the devil is in the details, that is, battery settings and how aggressively the phone manufacturer interferes with the background framework... there is always a possibility that we've introduced some bug or there is some edge case that we haven't handled yet. We would be happy to try reproducing the "background" issue that you're experiencing and work on a solution.
What's your back-end type, is that S3 or Rclone? Do you have E2E enabled? (edited)
Cryptomator requires the masterkey.cryptomator file located in the root directory of the vault. If for some reason this file isn't synced in your vault you won't be able to decrypt your files: https://docs.cryptomator.org/en/latest/security/architecture/#masterkey-file
Directory contents move/rename protection isn't present for the Rclone cipher, which I've mentioned here: https://github.com/rclone/rclone/issues/7192 (scroll down to: 4. No path protection), so there is no risk that data will be corrupted during a move/rename.
We receive bug reports frequently from our users, but no data corruption has been mentioned (other than lost passwords), if that helps you to feel reassured. (edited)
By detect, I mean that the encryption scheme in both cases has a built-in mechanism to verify data integrity, so it's not possible to flip a bit of data in the middle of a file (HDD corruption or a deliberate attack by an adversary) and expect that this will go undetected.
In some use cases the user must be assured that the content they've encrypted actually belongs to them and wasn't altered, e.g. some legal text or some evidence etc.
You can reach support at s3drive.app or DM myself.
We store the .empty file as a single file from which its parent folder prefix is born.
Alternatively we could try to insert a folder/ key as a folder placeholder, but that's not universally supported across providers and isn't a cross-compatible approach; e.g. MinIO (the most common self-hosted file-based S3 server) doesn't support it.
In 1.9.3, which we're just releasing (on macOS it shall be available within ~2 days), there is improved concurrency for multipart uploads: https://s3drive.app/changelog which might make multipart uploads faster than with multipart disabled (it was the other way round previously).
The speed bottleneck for a single big file might still be the single-threaded encryption, XSalsa20-Poly1305 (which is lighter than the AES-GCM used by Cryptomator, but doesn't have the benefit of the hardware acceleration which AES has).
For multiple file uploads encryption speed shouldn't be an issue, as the encryption load would be spread evenly across multiple CPU threads/cores.
In the next releases we will be improving single big file encryption speeds by providing multi-threaded chunk encryption. We're actively improving in all these areas. (edited)
The .empty file is used to make sure the folder can be identified, if I'm not mistaken. And no, I'm using Windows, and yes, E2E encryption is enabled; as for multipart upload, I decided to disable it because it's much slower, and I use the app because WebDAV is slower.
I see, that makes sense; well, I'm excited for the new update!
1.9.3 is released on Windows; feel free to try out multipart mode. In the next releases we will be tweaking concurrency settings, further improving multipart mode, and exposing the settings to the user, so they can tweak them according to their desired use.
If you face any issues with the app or would like to submit a new request, please visit: #support
Thanks! (edited)
rclone obscure is meant to prevent "eavesdropping" only. Imagine someone watching you from behind your back.
The config is located under the rclone config file path, which likely resolves to /home/<user>/.config/rclone/rclone.conf on Ubuntu.
We will be implementing config encryption in order to secure the Rclone config: https://rclone.org/docs/#configuration-encryption
EDIT: Added feature request: https://s3drive.canny.io/feature-requests/p/rclone-encrypted-config (edited)
You can email support within the s3drive.app domain. (edited)
If 1.9.6 is giving you trouble, you can always find previous versions on our Github page: https://github.com/s3drive/windows-app/releases
By any chance, does the issue exist on 1.9.4 as well?
Use the Connect link.
You can email support in the s3drive.app domain. (edited)
Once you bundle glibc and its own dependencies it gets trickier... and the solution is actually to use the oldest possible OS to build the AppImage itself, but then the app relies on libmpv2, whereas the older build system has only libmpv1 available, and so on... and with FUSE I don't even remember the problem.
We use XSalsa20-Poly1305; Kopia uses CHACHA20-POLY1305-HMAC-SHA256.
It also uses a different storage layout:
https://kopia.io/docs/advanced/architecture/#content-addressable-block-storage-cabs
and
https://kopia.io/docs/advanced/architecture/#content-addressable-object-storage-caos
which we would have to parse/decrypt and understand in order to produce a file list viewable to the user.
Definitely possible, but given our workload and other features we would spread ourselves too thin if we started work on that.
Added a feature item though: https://s3drive.canny.io/feature-requests/p/add-suport-for-reading-kopia-backups
Click the View button to access your billing details on Stripe, where you can manage your subscription.
s3drive://sync might actually open the Sync screen directly, without the otherwise necessary user input in the drawer.
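If you want to test that deep link on Android, something along these lines should trigger it, assuming the scheme is registered by the app:
# Fire the custom-scheme intent on a connected device/emulator
adb shell am start -a android.intent.action.VIEW -d "s3drive://sync"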
There is a 1.9.11 release as well, which aims to fix background media uploads after Android 14 imposed certain restrictions; it's been broken since we updated the SDK recently.
We were forced by Google to use the latest SDK by the end of August... but our existing background processing approach requires specific approval from Google: https://developer.android.com/about/versions/14/behavior-changes-14#fgs-types and https://stackoverflow.com/a/77186316 (edited)
If you're in a subfolder you would see something like this, e.g. Files > folder > subfolder.
If you tap on folder you would go one level up. In some cases a level might be hidden behind ... if there isn't enough space in this bar; in such a case you can tap it to unhide it and then swipe left/right to see all levels.
The data is stored in the Library path of the application. It's using the native iOS sandbox security, so as long as your device isn't compromised or rooted/jailbroken (not that we're against it, but the user needs to understand the security implications) you should be fine.
We also plan to add a lock screen: https://s3drive.canny.io/feature-requests/p/lock-screen-pin-biometric-face-id (edited)
Can you share your rclone.conf?
You can convert it to JSON with: cat rclone.conf | jc --ini. If you need the path to your Rclone config you can type: rclone config file.
If you use any online converter instead, strip any sensitive parts before pasting data online.
Native .ini will be supported at some point: https://s3drive.canny.io/feature-requests/p/add-support-for-rcloneconf-import-and-ini-format (edited)
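For example, to pull a single section out of the converted config (the section name s3drive_remote is a placeholder; jc keys the output by section):
cat rclone.conf | jc --ini | jq '."s3drive_remote"'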
jc?
You may not need the Writes caching mode (available in the Profile settings); otherwise, you could use either Writes or Full to speed up reads for other users using the mount, as files would be cached locally.
At the moment the cache size is limited to 1024M, but in the future, depending on users' input, we could make this parameter configurable.
If your connection is fast enough (and your remote server is close), then you might not even need the cache, as operations would be fast enough.
I would be keen to know if such a setup works for you. If you have any other questions regarding S3Drive, I would be glad to help.
If you email support at s3drive.app (or DM me directly) with some numbers, we would certainly be able to get you some discount. Thanks! (edited)